TECHnalysis Research Blog

August 10, 2022
IBM Research Tech Makes Edge AI Applications Scalable

By Bob O'Donnell

One of the more intriguing topics driving evolution in the technology world is edge computing. After all, how can you not get excited about a concept that promises to bring distributed intelligence across a multitude of interconnected computing resources all working together to achieve a singular goal?

The real-world problem is that early iterations of edge computing turned out to be a lot more exciting in theory than in practice. Trying to distribute computing tasks across multiple locations and then coordinate those various efforts into a cohesive, meaningful whole is a lot harder than it first appears. This is particularly true when attempting to scale small proof-of-concept (POC) projects into full-scale production.

Issues like the need to move enormous amounts of data from location to location—which, ironically, was supposed to be unnecessary with edge computing—as well as overwhelming demands to label that data are just two of several factors that have conspired to make successful edge computing deployments the exception as opposed to the rule.

IBM’s Research Group, partnering with IBM Sustainability Software and IBM Consulting, has been working to help overcome some of these challenges for several years now. Recently the group has begun to see success in industrial environments like automobile manufacturing by taking a different approach to the problem. In particular, the company has been rethinking how data is being analyzed at various edge locations and how AI models are being shared with other sites.

At car manufacturing plants, for example, most companies have started to use AI-powered visual inspection models that help spot manufacturing flaws that may be difficult or too costly for humans to recognize. Proper use of tools like the Visual Inspection solution with Zero D (Defects or Downtime) in IBM’s Maximo Application Suite can both save car manufacturers significant amounts of money by avoiding defects and keep manufacturing lines running as quickly as possible. Given the supply chain-driven constraints that many auto companies have faced recently, that point has become particularly critical.
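
To make the idea concrete, here is a minimal, hypothetical sketch of what such an automated visual-inspection check might look like on the line. The scoring function is a stand-in for a trained vision model (it is not the Maximo API), and the threshold is an assumed value that would be tuned per plant.

    import numpy as np

    def defect_probability(image: np.ndarray) -> float:
        """Stand-in for a trained defect classifier; returns a score in [0, 1]."""
        # Placeholder heuristic purely for illustration: treat unusually dark
        # frames as more likely to contain a flaw.
        return float(1.0 - image.mean() / 255.0)

    DEFECT_THRESHOLD = 0.8  # assumed tolerance; tuned per plant in practice

    def inspect(image: np.ndarray) -> bool:
        """Return True if the part should be pulled for manual review."""
        return defect_probability(image) >= DEFECT_THRESHOLD

    # Simulated 64x64 grayscale camera frame
    frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    print("flag for review:", inspect(frame))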

The real trick, however, is getting to the Zero D aspect of the solution, because inconsistent results based on wrongly interpreted data can actually have the opposite effect, especially if that bad data ends up being propagated across multiple manufacturing sites through inaccurate AI models. To avoid costly and unnecessary production line shutdowns, it’s critical to make sure that only the appropriate data is being used to generate the AI models and that the models themselves are checked for accuracy on a regular basis to avoid any flaws that wrongly labeled data might create.
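
As a rough illustration of that kind of accuracy check, the sketch below (a hypothetical example, not IBM’s actual process) gates a retrained model on a trusted, human-verified holdout set before it is allowed to replace the model already running on the line.

    from typing import Callable, List, Tuple
    import numpy as np

    Model = Callable[[np.ndarray], int]  # image -> predicted label

    def accuracy(model: Model, holdout: List[Tuple[np.ndarray, int]]) -> float:
        correct = sum(1 for image, label in holdout if model(image) == label)
        return correct / len(holdout)

    def should_promote(candidate: Model, current: Model,
                       holdout: List[Tuple[np.ndarray, int]],
                       min_gain: float = 0.0) -> bool:
        """Promote the retrained model only if it is at least as accurate as
        the one currently deployed, measured on trusted, human-verified data."""
        return accuracy(candidate, holdout) >= accuracy(current, holdout) + min_gain

    # Tiny illustration with stand-in models on fake data
    holdout = [(np.zeros((8, 8)), 0), (np.ones((8, 8)), 1)]
    current = lambda img: 0                        # always predicts "no defect"
    candidate = lambda img: int(img.mean() > 0.5)  # simple threshold stand-in
    print(should_promote(candidate, current, holdout))  # True: candidate is better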

This “recalibration” of the AI models is the essence of the secret sauce that IBM Research is bringing to manufacturers, and in particular to a major US automotive OEM. IBM is working on what it calls Out of Distribution (OOD) Detection algorithms, which can help determine whether the data being used to refine the visual models falls outside an acceptable range and might, therefore, cause the model to perform an inaccurate inference on incoming data. Most importantly, it’s doing this work on an automated basis, both to avoid the slowdowns that time-consuming human labeling efforts would create and to enable the work to scale across multiple manufacturing sites.
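
IBM has not published the details of these algorithms here, but a common way to screen for out-of-distribution samples is to measure how far a new sample’s features fall from the distribution of the data the model was trained on. The Mahalanobis-distance sketch below is an illustrative assumption in that spirit, not IBM’s method.

    import numpy as np

    class OODScreen:
        """Flags samples whose features fall outside the range seen in training."""

        def __init__(self, train_embeddings: np.ndarray, percentile: float = 99.0):
            # Fit the "acceptable range" from in-distribution training features.
            self.mean = train_embeddings.mean(axis=0)
            cov = np.cov(train_embeddings, rowvar=False)
            self.inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
            dists = [self._distance(e) for e in train_embeddings]
            self.threshold = np.percentile(dists, percentile)

        def _distance(self, embedding: np.ndarray) -> float:
            d = embedding - self.mean
            return float(np.sqrt(d @ self.inv_cov @ d))

        def is_out_of_distribution(self, embedding: np.ndarray) -> bool:
            # True if the sample looks unlike the data the model was built on,
            # so it should not be used to refine the model without review.
            return self._distance(embedding) > self.threshold

    # Illustrative use with random vectors standing in for image features
    rng = np.random.default_rng(0)
    screen = OODScreen(rng.normal(size=(500, 16)))
    print(screen.is_out_of_distribution(rng.normal(size=16)))          # likely False
    print(screen.is_out_of_distribution(rng.normal(loc=8, size=16)))   # likely True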

A byproduct of OOD Detection, called Data Summarization, is the ability to select only the most useful data for manual inspection, labeling and model updates. In fact, IBM is targeting a 10-100x reduction in the amount of data traffic that currently occurs with many early edge computing deployments. In addition, by eliminating redundant data (near-identical images), this approach delivers a 10x better utilization of the person hours spent on manual inspection and labeling. In combination with state-of-the-art techniques like OFA (Once For All) model architecture exploration, the company is also hoping to reduce the size of the models by as much as 100x, which makes edge computing deployments more efficient. Plus, in conjunction with automation technologies designed to more easily and accurately distribute these models and data sets, it lets companies create AI-powered edge solutions that can successfully scale from smaller POCs to full production deployments.
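
As another illustrative assumption rather than IBM’s implementation, the summarization idea can be approximated by dropping near-duplicate images before they are shipped off for manual inspection and labeling, as in this simple cosine-similarity sketch.

    import numpy as np

    def summarize(embeddings: np.ndarray, similarity_threshold: float = 0.98) -> list:
        """Return indices of a reduced set with near-duplicate samples removed."""
        # Normalize rows so dot products are cosine similarities.
        normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        kept = []
        for i, vec in enumerate(normed):
            # Keep a sample only if it is not too similar to anything already kept.
            if all(vec @ normed[j] < similarity_threshold for j in kept):
                kept.append(i)
        return kept

    # Example: 1,000 frame embeddings, most of them nearly identical
    rng = np.random.default_rng(1)
    base = rng.normal(size=(50, 32))
    frames = np.vstack([base + 0.01 * rng.normal(size=(50, 32)) for _ in range(20)])
    keep = summarize(frames)
    print(f"kept {len(keep)} of {len(frames)} frames")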

Efforts like the one being explored at a major US automotive OEM are an important step toward proving the viability of these solutions for markets like manufacturing. However, IBM also sees the opportunity to apply these model-refinement concepts to many other industries, including telcos, retail, industrial automation and even autonomous driving. The trick is to create solutions that work across the inevitable heterogeneity of edge computing environments and leverage the unique value that each edge computing site can produce on its own.

As edge computing evolves, it’s clear that it’s not necessarily about collecting and analyzing as much data as possible, but rather finding the right data and using it as wisely as possible.

Here’s a link to the original column: https://www.linkedin.com/pulse/ibm-research-tech-makes-edge-ai-applications-scalable-bob-o-donnell/

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and the professional financial community. You can follow him on Twitter @bobodtech.
